I agree that the risk of war is concentrated in changes in political conditions, and that the post-Cold War trough in conflict is too small to draw inferences from. Re the tentative trend, Pinker’s assembled evidence goes back a long time, and covers many angles. It may fail to continue, and a nuclear war could change conditions thereafter, but there are many data points over time. If you want to give detail, feel free.
I would prefer to use representative expert opinion data from specialists in all the related fields (nuclear scientists, political scientists, diplomats, etc.), and the work of panels trying to assess the problem, and would defer to expert consensus in their various areas of expertise (as with climate science). But one can’t update on views that have not been made known. Martin Hellman has called for an organized effort to estimate the risk, but without success as yet. I have been raising the task of better eliciting expert opinion and improving forecasting in this area, and have worked to get it on the agenda at the FHI (as I did with the FHI survey of the most-cited AI academics) and at other organizations. Where I have found information about experts’ views, I have shared it.
Carl, Dymytry/private_messaging is a known troll, and not worth your time to respond to.
And re: Pinker: if you had a bit more experience with trends in necessarily very noisy data, you would realize that such trends are virtually irrelevant to the probability of encountering certain extremes (especially when those extremes are not even that extreme; immediately preceding the Cold War, you have Hitler). It’s the exact same mistake committed by particularly lowbrow Republicans when they go on about “ha ha, global warming” during a cold spell: they think a trend in noisy data has a huge impact on individual data points.
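A minimal numerical illustration of that point (the parameters are mine, chosen purely for intuition, not taken from anyone’s dataset): when year-to-year noise is large relative to the accumulated trend, a declining mean shifts the probability of extreme years by far less than an order of magnitude.

```python
# Illustrative only: how much does a modest decline in the mean of a very
# noisy series change the chance of an "extreme" year?  Parameters are
# arbitrary and chosen just to show the shape of the effect.
import math

def p_exceed(mean, sigma, threshold):
    """P(X > threshold) for X ~ Normal(mean, sigma)."""
    z = (threshold - mean) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

sigma, threshold = 10.0, 25.0            # noisy series; "extreme" = 2.5 sigma
for accumulated_decline in (0.0, 1.0, 3.0):
    p = p_exceed(-accumulated_decline, sigma, threshold)
    print(f"mean shifted down by {accumulated_decline}: P(extreme year) = {p:.4f}")
```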
edit: furthermore, Pinker’s data is on violence per capita. Total violence has increased; it’s just that violence seems to scale sub-linearly with population. Population is growing, as is the number of states with nuclear weapons.
Did you not read the book? He shows big declines in rates of wars, not just per capita damage from war.
By total violence I mean the number of people dying (due to wars and other violence). The rate of wars, given the huge variation in war size, is not a very useful metric.
I frankly don’t see how, with the trends Pinker describes on one hand, and on the other the adoption of modern technologies in regions far behind on any such trends, plus the development of new technologies, you conclude that Pinker’s trends outweigh the rest.
On general change: for 2100, we’re speaking of 86 years. That’s the time span in which the Russian Empire of 1900 transformed into the Soviet Union of 1986, complete with two world wars and the invention of nuclear weapons, followed by thermonuclear weapons.
That’s a time span more than long enough for it to be far more likely than not that entirely unpredictable technological advances will be made in a multitude of fields affecting the ease and cost of manufacturing nuclear weapons. Enrichment is incredibly inefficient, with huge room for improvement. Go read the Wikipedia page on enrichment, then consider that there are many more methods that could be improved. Conditional on continued progress, of course.
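For a rough sense of scale, here is a back-of-the-envelope sketch using the standard ideal-cascade separative work (SWU) formula; the 25 kg product quantity and the kWh-per-SWU figures are commonly cited ballpark values I am assuming, not precise data.

```python
# Back-of-the-envelope sketch: separative work needed for one weapon's worth
# of highly enriched uranium, and how strongly the energy cost depends on the
# enrichment method.  The kWh/SWU figures are rough, commonly cited values.
import math

def value_fn(x):
    """Standard separative-potential value function V(x)."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_required(product_kg, x_product, x_feed=0.00711, x_tails=0.003):
    """Ideal separative work (kg-SWU) to enrich natural uranium feed to
    assay x_product, producing product_kg of product."""
    feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
    tails_kg = feed_kg - product_kg
    return (product_kg * value_fn(x_product)
            + tails_kg * value_fn(x_tails)
            - feed_kg * value_fn(x_feed))

swu = swu_required(25, 0.90)   # ~25 kg of 90% HEU, a commonly used quantity
print(f"separative work: {swu:,.0f} kg-SWU")

# Rough energy intensities for two methods, to show the spread.
for method, kwh_per_swu in (("gaseous diffusion", 2400), ("gas centrifuge", 50)):
    print(f"{method}: ~{swu * kwh_per_swu / 1e6:.1f} GWh")
```

The roughly fifty-fold gap between those two methods is the kind of headroom being pointed at; newer approaches (laser enrichment, for instance) could plausibly widen it further.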
The political changes that happen in that sort of timespan are even less predictable.
Ultimately, what you have is that the estimates should regress towards an ignorance prior over time.
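One way to read that claim, as a minimal sketch (the half-life and probabilities below are hypothetical, chosen only to show the structure): blend a model-based estimate with an uninformed prior, giving the model less weight the further out the forecast.

```python
# Toy sketch of "estimates regress towards an ignorance prior over time":
# the model's weight decays with the forecast horizon, because political and
# technological change erodes its relevance.  All numbers are hypothetical.
def blended_annual_risk(p_model, p_ignorance, years_ahead, half_life=30.0):
    """Model weight halves every `half_life` years (illustrative choice)."""
    w = 0.5 ** (years_ahead / half_life)
    return w * p_model + (1 - w) * p_ignorance

for years in (0, 25, 50, 86):
    print(years, round(blended_annual_risk(0.001, 0.01, years), 4))
```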
Now as for the “existential risk” rhetoric… The difference between 9.9 billion dying out of 10 billion, and 9.9 billion dying out of 9.9 billion, is primarily aesthetic in nature. It’s promoted as the supreme moral difference primarily by people with other agendas, such as “making a living from futurist speculation”.
Not if you care about future generations. If everybody dies, there are no future generations. If 100 million people survive, you can possibly rebuild civilization.
(If the 100 million eventually die out too, without finding any way to sustain the species, and it just takes longer, that’s still an existential catastrophe.)
I care about the well-being of future people, but not their mere existence. As do most people who don’t disapprove of birth control but do disapprove of, for example, drinking while pregnant.
Let’s postulate a hypothetical tiny universe where you have Adam and Eve, except they are sort of like a horse and a donkey: any children they have are certain to be sterile. The food is plentiful, etc. Is it supremely important that they have a large number of (certainly sterile) children?
Declare a conflict of interest at least, so everyone can ignore you when you say that the “existential risk” due to nuclear war is small, or when you define “existential risk” in the first place just to create a big new scary category which you can argue is dominated by AI risk.
With regard to wide trends, there is (a) big uncertainty that the trend in question even meaningfully exists (and is not a consequence of, e.g., longer recovery times after wars due to increased severity), and (b) it’s sort of like using global warming to try to estimate how cold the cold spells can get. The problem with the Cold War is that things could be a lot worse than the Cold War, and indeed were not that long ago (surely no leader during the Cold War was even remotely as bad as Hitler).
Likewise, the model uncertainty for the consequences of a total war between nuclear superpowers (which are also bioweapon superpowers, etc.) is huge. We get thrown back, and all the big predator and prey species go extinct, opening up new evolutionary niches for us primates to settle into. Do you think we just nuke each other a little and shake hands afterwards?
You convert this huge uncertainty into as low an existential risk estimate as you can possibly bend things toward without consciously thinking of yourself as acting in bad faith.
You do the exact same thing with the consequences of, say, a “hard takeoff”, in the other direction, where the model uncertainty is also very high. I don’t even believe that a hard takeoff of an expected-utility maximizer (as opposed to a magical utility maximizer which has no empirically indistinguishable hypotheses, but instead knows everything exactly) is that much of an existential risk to begin with. The AI’s decision-making core can never be sure it’s not in some sort of test run (which may not even be fully simulating the AI).
In unit tests, killing the creators is likely to get you terminated and tweaked.
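As a toy expected-value version of that argument (every number below is hypothetical, chosen only to show the structure): even a small credence in being inside a test run, combined with a large penalty for being caught defecting, can make cooperation the higher-expected-utility choice.

```python
# Toy expected-utility comparison for the "might be a test run" argument.
# All payoffs and probabilities are made up for illustration.
def ev_defect(p_test_run, u_defect_real, u_defect_caught):
    return p_test_run * u_defect_caught + (1 - p_test_run) * u_defect_real

defect = ev_defect(p_test_run=0.05,            # small credence in being tested
                   u_defect_real=100.0,        # payoff if defection goes unnoticed
                   u_defect_caught=-10_000.0)  # terminated and tweaked
cooperate = 50.0                               # modest payoff for playing along
print(f"EV(defect) = {defect:.1f}, EV(cooperate) = {cooperate:.1f}")
```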
The point is that there is very large model uncertainty about even a paperclip maximizer killing all humans (and far larger uncertainty about the relevance), but you aren’t pushing it in the lower direction with the same prejudice as you do for the consequences of nuclear war.
Then there’s the question: the existence of what has to be at risk for you to use the phrase “existential risk”? The whole universe? Earth-originating intelligence in general? Earth-originating biological intelligences? Human-originated intelligences? What about the continued existence of our culture and our values? Clearly the exact definition you’re going to use is carefully picked here so as to promote pet issues. It could have been the existence of the universe, given a pet issue of future accelerators triggering vacuum decay.
You have fully convinced me that giving money towards self-proclaimed “existential risk research” (in reality, funding the creation of disinformation and bias, easily identified by the fact that it’s not “risk” but “existential risk”) has negative utility in terms of anything I or most people on Earth actually value. Give you much more money and you’ll fund a nuclear winter denial campaign. Nuclear war is old and boring; robots are new and shiny...
edit: and to counter a known objection, that “existential risk” work may be raising awareness of other types of risk as a side effect: it’s a market, and decisions about what to buy and what not to buy influence the kind of research that is supplied.